Sparse-view CT reconstruction is important in a wide range of applications due to limitations on cost, acquisition time, or dosage. However, traditional direct reconstruction methods such as filtered back projection (FBP) produce low-quality reconstructions in the sub-Nyquist regime. In contrast, deep neural networks (DNNs) can produce high-quality reconstructions from sparse and noisy data, for example by post-processing FBP reconstructions, as can model-based iterative reconstruction (MBIR), albeit at a higher computational cost. In this paper, we introduce a DNN method called Recurrent Stacked Back Projection (RSBP), which uses the back projections of sequentially acquired individual views as input to a recurrent convolutional LSTM network. The SBP structure preserves all of the information in the sinogram, while the recurrent processing exploits the correlations between adjacent views and produces an updated reconstruction after each new view. We train and test our network on both simulated and real data, and demonstrate that RSBP outperforms both DNN post-processing of FBP images and basic MBIR, at a lower computational cost than MBIR.
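As a minimal illustration of the single-view back projection that forms the SBP input above, the numpy sketch below (function name, grid conventions, and parameters are my own, not from the paper) smears one parallel-beam projection across an image grid; in RSBP, one such channel per view would be fed sequentially into a convolutional LSTM, which is not sketched here.

```python
import numpy as np

def backproject_view(proj, theta, n):
    """Smear one 1-D parallel-beam projection (taken at angle `theta`,
    in radians) across an n-by-n image grid.  Stacking these per-view
    images gives the 'stacked back projection' input structure."""
    # Pixel-centre coordinates with the origin at the image centre.
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    # Detector coordinate sampled by each pixel for this view angle.
    t = X * np.cos(theta) + Y * np.sin(theta)
    det = np.arange(len(proj)) - (len(proj) - 1) / 2.0
    # Linear interpolation along the detector; zero outside its extent.
    return np.interp(t, det, proj, left=0.0, right=0.0)

# One channel per acquired view, stacked along the first axis.
views = [np.ones(64), np.ones(64)]
angles = [0.0, np.pi / 4]
sbp = np.stack([backproject_view(p, a, 64) for p, a in zip(views, angles)])
```

A constant projection at angle 0 back-projects to a constant image, while the 45-degree view is zero in the corners that fall outside the detector extent.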
Radiography is often used to probe complex, evolving density fields in dynamic systems, in order to gain insight into the underlying physics. The technique has been used in many fields, including materials science, shock physics, inertial confinement fusion, and other national security applications. In many of these applications, however, complications from noise, scatter, complex beam dynamics, and so on prevent density reconstructions accurate enough to identify the underlying physics with sufficient confidence. Consequently, density reconstruction from static/dynamic radiography has typically been limited to identifying discontinuous features such as cracks and voids. In this work, we propose a fundamentally new approach to reconstructing density from a temporal sequence of radiographs. Using only the robust features identifiable in radiographs, we combine them with the underlying hydrodynamic equations of motion using a machine learning method, namely a conditional generative adversarial network (cGAN), to determine the density fields from a dynamic sequence of radiographs. We then seek to further improve the hydrodynamic consistency of the ML-based density reconstruction through a process of parameter estimation and projection onto a hydrodynamic manifold. In this context, we note that the distance of test data from the hydrodynamic manifold given by the training data in the considered parameter space serves as a diagnostic of the reliability of the prediction, and can be used to augment the training database, with the expectation that the latter will further reduce future density reconstruction errors. Finally, we demonstrate the ability of this approach to outperform traditional radiographic reconstruction in capturing allowable hydrodynamic paths, even in the presence of a relatively modest amount of scatter.
In many computed tomography (CT) imaging applications, it is important to rapidly collect data from an object that is moving or changing with time. Tomographic acquisition is usually assumed to be step-and-shoot, in which the object is rotated to each desired angle and a view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice fly-scanning is used, in which the object is rotated continuously while data are collected. This, however, can result in motion-blurred views and consequently reconstructions with severe motion artifacts. In this paper, we introduce CodEx, a modular framework for joint deblurring and tomographic reconstruction that can effectively invert the motion blur introduced by fly-scanning. The method is a synergistic combination of a novel acquisition scheme with a novel non-convex Bayesian reconstruction algorithm. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. Encoding the measurements with a well-chosen binary code improves the accuracy of the inversion process. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making the reconstruction practical to implement. Reconstruction results on both simulated and experimental data demonstrate the effectiveness of our method.
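The ADMM splitting described above can be illustrated on a deliberately tiny 1-D analogue (this is not the paper's algorithm: the circulant blur, the l1 prior standing in for the reconstruction model, and all parameter values are my own choices). The x-update plays the role of the deblurring sub-problem and the v-update the role of the prior/reconstruction step.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_coded_deblur(y, C, lam=1e-3, rho=1.0, n_iter=500):
    """ADMM for  min_x (1/2)||C x - y||^2 + lam ||v||_1  s.t. x = v.
    The x-update is the quadratic deblurring sub-problem; the v-update
    stands in for the reconstruction/prior sub-problem."""
    n = C.shape[1]
    x = np.zeros(n)
    v = np.zeros(n)
    u = np.zeros(n)
    M = np.linalg.inv(C.T @ C + rho * np.eye(n))  # fine for a tiny toy
    Cty = C.T @ y
    for _ in range(n_iter):
        x = M @ (Cty + rho * (v - u))         # deblurring sub-problem
        v = soft_threshold(x + u, lam / rho)  # prior sub-problem
        u = u + x - v                         # dual update
    return v

# Circulant blur built from a known binary exposure code.
code = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
C = np.stack([np.roll(code, k) for k in range(8)])
x_true = np.zeros(8)
x_true[1], x_true[4] = 2.0, -1.0
x_hat = admm_coded_deblur(C @ x_true, C)
```

Because this particular code has a non-vanishing discrete spectrum, the coded blur is invertible and the ADMM iterates recover the sparse signal to within the small l1 bias.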
This work builds on the models and concepts presented in Part I to learn approximate dictionary representations of Koopman operators from data. Part I of this paper presented a methodology for arguing the subspace invariance of a Koopman dictionary. This methodology was demonstrated on the state-inclusive logistic lifting (SILL) basis: an affine basis augmented with conjunctive logistic functions. The SILL dictionary's nonlinear functions are homogeneous, as is the norm in data-driven dictionary learning of Koopman operators. In this paper, we discover that structured mixing of heterogeneous dictionary functions drawn from different classes of nonlinear functions achieves the same accuracy and dimensional scaling as the deep-learning-based deepDMD algorithm. We specifically show this by building a heterogeneous dictionary composed of SILL functions and conjunctive radial basis functions (RBFs). This mixed dictionary achieves the same accuracy and dimensional scaling as deepDMD with an order-of-magnitude reduction in parameters, while maintaining geometric interpretability. These results strengthen the viability of dictionary-based Koopman models for solving high-dimensional nonlinear learning problems.
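A minimal sketch of what such a heterogeneous lifting might look like (the function name, centre placement, and parameter values are illustrative assumptions, not the paper's construction): the state itself, conjunctive logistic (SILL-style) functions, and Gaussian RBFs are concatenated into one dictionary.

```python
import numpy as np

def mixed_dictionary(X, centers, steepness=5.0, gamma=1.0):
    """Lift states X (n_samples, d) with a heterogeneous dictionary:
    constant + state + conjunctive logistic functions + Gaussian RBFs,
    both nonlinear classes centred at the rows of `centers` (n_c, d)."""
    diff = X[:, None, :] - centers[None, :, :]  # (n_samples, n_c, d)
    # SILL-style conjunctive logistic: product of 1-D sigmoids per centre.
    sill = np.prod(1.0 / (1.0 + np.exp(-steepness * diff)), axis=2)
    # Conjunctive (multivariate Gaussian) radial basis functions.
    rbf = np.exp(-gamma * np.sum(diff ** 2, axis=2))
    return np.hstack([np.ones((len(X), 1)), X, sill, rbf])

X = np.random.default_rng(0).normal(size=(10, 2))
centers = np.array([[0.0, 0.0], [1.0, -1.0], [-1.0, 1.0]])
Psi = mixed_dictionary(X, centers)  # shape (10, 1 + 2 + 3 + 3)
```

Each centre contributes one logistic and one RBF feature, so the lifted dimension grows linearly in the number of centres rather than with a learned network's parameter count.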
Koopman operators model nonlinear dynamics as a linear dynamical system acting on a nonlinear function of the state. This nonstandard state is often called a Koopman observable and is usually approximated numerically by a superposition of functions drawn from a dictionary. In a widely used algorithm, Extended Dynamic Mode Decomposition (EDMD), the dictionary functions are drawn from a fixed class of functions. Recently, deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD). The learned representation both (1) accurately models and (2) scales well with the dimension of the original nonlinear system. In this paper we analyze the learned dictionaries from deepDMD and explore the theoretical basis for their strong performance. We explore State-Inclusive Logistic Lifting (SILL) dictionary functions to approximate Koopman observables. Error analysis of these dictionary functions shows that they satisfy a property of subspace approximation, which we define as uniform finite approximate closure. Our results provide a hypothesis to explain the success of deep neural networks in learning numerical approximations to Koopman operators. Part 2 of this paper will extend this explanation by demonstrating the subspace invariance of heterogeneous dictionaries and presenting a head-to-head numerical comparison of deepDMD and low-parameter heterogeneous dictionary learning.
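The EDMD step referenced above fits in a few lines: lift paired state snapshots through a dictionary and solve a least-squares problem for the operator. The quadratic dictionary and toy linear system below are illustrative choices of mine, picked so the dictionary is exactly invariant and EDMD recovers the dynamics.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Extended Dynamic Mode Decomposition: least-squares fit of K with
    dictionary(Y) ~= dictionary(X) @ K, where rows of X are states and
    rows of Y the corresponding next states."""
    K, *_ = np.linalg.lstsq(dictionary(X), dictionary(Y), rcond=None)
    return K

def quad_dict(X):
    """Fixed dictionary: constant, state, and all quadratic monomials."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x1 * x2, x2 ** 2])

# Toy linear system x_{k+1} = A x_k; the quadratic dictionary is
# invariant under it, so the lifted dynamics are exactly linear.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
X = rng.normal(size=(200, 2))
Y = X @ A.T
K = edmd(X, Y, quad_dict)
```

The block of K acting on the state coordinates reproduces the system matrix, and one-step prediction in the lifted space is exact up to numerical precision.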
We present a unified probabilistic model that learns a representative set of discrete vehicle actions and predicts the probability of each action given a particular scenario. Our model also enables us to estimate the distribution over continuous trajectories conditioned on a scenario, representing what each discrete action would look like if executed in that scenario. While our primary objective is to learn representative action sets, these capabilities combine to produce accurate multimodal trajectory predictions as a byproduct. Although our learned action representations closely resemble semantically meaningful categories (e.g., "go straight", "turn left", etc.), our method is entirely self-supervised and does not utilize any manually generated labels or categories. Our method builds upon recent advances in variational inference and deep unsupervised clustering, resulting in full distribution estimates based on deterministic model evaluations.
Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
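The reservoir-computing paradigm mentioned above can be sketched in software with an echo-state-style network (a common abstraction of a physical reservoir; all names, sizes, and the delay task below are my own assumptions, not tied to any spintronic device). The recurrent weights stay fixed, and only a linear readout is trained, here by ridge regression on a short-term-memory task.

```python
import numpy as np

def run_reservoir(u, n_res=100, spectral_radius=0.9, seed=0):
    """Drive a fixed random tanh reservoir with the scalar input
    sequence u and return the state trajectory.  The reservoir weights
    are never trained -- only a linear readout is fit afterwards."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.normal(size=(n_res, n_res))
    # Rescale to the target spectral radius (the usual memory knob).
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states

# Short-term-memory task: ridge-regression readout reproducing the
# input delayed by three steps, after discarding a 50-step washout.
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, size=500)
S = run_reservoir(u)
X, y = S[50:], u[47:-3]
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
```

The same train-only-the-readout structure is what makes the paradigm attractive for physical substrates: the spintronic device supplies the fixed nonlinearity and memory, and learning reduces to a linear fit.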
Behavioural cloning (BC) is a commonly used imitation learning method to infer a sequential decision-making policy from expert demonstrations. However, when the quality of the data is not optimal, the resulting behavioural policy also performs sub-optimally once deployed. Recently, there has been a surge in offline reinforcement learning methods that hold the promise to extract high-quality policies from sub-optimal historical data. A common approach is to perform regularisation during training, encouraging updates during policy evaluation and/or policy improvement to stay close to the underlying data. In this work, we investigate whether an offline approach to improving the quality of the existing data can lead to improved behavioural policies without any changes in the BC algorithm. The proposed data improvement approach - Trajectory Stitching (TS) - generates new trajectories (sequences of states and actions) by `stitching' pairs of states that were disconnected in the original data and generating their connecting new action. By construction, these new transitions are guaranteed to be highly plausible according to probabilistic models of the environment, and to improve a state-value function. We demonstrate that the iterative process of replacing old trajectories with new ones incrementally improves the underlying behavioural policy. Extensive experimental results show that significant performance gains can be achieved using TS over BC policies extracted from the original data. Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining TS with two existing offline learning methodologies reliant on BC, model-based offline planning (MBOP) and policy constraint (TD3+BC).
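A toy sketch of the stitching criterion described above (everything here is illustrative: the real TS method uses probabilistic environment models to judge plausibility and generates the connecting action, both of which this sketch omits): walk along one trajectory and splice in another trajectory's continuation at the first nearby, higher-value state.

```python
import numpy as np

def stitch(traj_a, traj_b, value, eps=0.5):
    """Splice traj_b's continuation onto traj_a's prefix at the first
    state of traj_a lying within eps of a higher-value state of
    traj_b.  `value` is a state-value estimate; the connecting action
    a learned model would synthesise is omitted in this sketch."""
    for i, sa in enumerate(traj_a):
        for j, sb in enumerate(traj_b):
            if np.linalg.norm(sa - sb) < eps and value(sb) > value(sa):
                return np.vstack([traj_a[:i + 1], traj_b[j + 1:]])
    return traj_a  # no admissible stitch found

# 1-D states where the value is simply the state itself.
traj_a = np.array([[0.0], [1.0], [2.0]])
traj_b = np.array([[1.2], [5.0]])
new = stitch(traj_a, traj_b, value=lambda s: s[0])
```

Here the state 1.0 in the first trajectory is within eps of the higher-value state 1.2 in the second, so the stitched trajectory follows the first trajectory to 1.0 and then jumps to the second trajectory's continuation.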
Several works have proven that finetuning is an applicable approach for debiasing contextualized word embeddings. Similarly, discrete prompts with semantic meanings have been shown to be effective in debiasing tasks. With unfixed mathematical representations at the token level, continuous prompts usually surpass discrete ones at providing a pre-trained language model (PLM) with additional task-specific information. Despite this, relatively few efforts have been made to debias PLMs by prompt tuning with continuous prompts compared to its discrete counterpart. Furthermore, for most debiasing methods that alter a PLM's original parameters, a major challenge is the need not only to decrease the bias in the PLM but also to ensure that the PLM does not lose its representation ability. Finetuning methods typically have a hard time maintaining this balance, as they tend to aggressively remove the meanings of attribute words. In this paper, we propose ADEPT, a method to debias PLMs using prompt tuning while maintaining the delicate balance between removing biases and preserving representation ability. To achieve this, we propose a new training criterion inspired by manifold learning and equip it with an explicit debiasing term to optimize prompt tuning. In addition, we conduct several experiments on the reliability, quality, and quantity of a previously proposed attribute training corpus in order to obtain a clearer prototype of a certain attribute, which indicates the attribute's position and relative distances to other words on the manifold. We evaluate ADEPT on several widely acknowledged debiasing benchmarks and downstream tasks, and find that it achieves competitive results while maintaining (and in some cases even improving) the PLM's representation ability. We further visualize words' correlations before and after debiasing a PLM, and give some possible explanations for the observed effects.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.